
    Generative Street Addresses from Satellite Imagery

    We describe our automatic generative algorithm to create street addresses from satellite images by learning and labeling roads, regions, and address cells. Currently, 75% of the world’s roads lack adequate street addressing systems. Recent geocoding initiatives tend to convert pure latitude and longitude information into a memorable form for unknown areas. However, settlements are identified by streets, and such addressing schemes are not coherent with the road topology. Instead, we propose a generative address design that maps the globe in accordance with streets. Our algorithm starts with extracting roads from satellite imagery by utilizing deep learning. Then, it uniquely labels the regions, roads, and structures using graph- and proximity-based algorithms. We also extend our addressing scheme to (i) cover inaccessible areas following similar design principles; (ii) be inclusive and flexible for changes on the ground; and (iii) serve as a pioneer for a unified street-based global geodatabase. We present our results on an example of a developed city and multiple undeveloped cities. We also compare productivity on the basis of current ad hoc and new complete addresses. We conclude by contrasting our generative addresses with current industrial and open solutions.
    Keywords: road extraction; remote sensing; satellite imagery; machine learning; supervised learning; generative schemes; automatic geocoding
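
    As a rough illustration of the street-based addressing idea (not the paper's actual graph- and proximity-based labeling), the Python sketch below partitions a road polyline into fixed-length address cells and emits hierarchical codes of the form region.road.cell; the region name, road identifier, and 50 m cell length are assumed placeholders.

    import math

    def cumulative_distance(points):
        """Cumulative Euclidean distance along a polyline (metres, projected coordinates assumed)."""
        dist = [0.0]
        for (x0, y0), (x1, y1) in zip(points, points[1:]):
            dist.append(dist[-1] + math.hypot(x1 - x0, y1 - y0))
        return dist

    def address_cells(points, region, road_id, cell_len=50.0):
        """Assign a hypothetical "<region>.<road>.<cell>" code to every vertex of a road."""
        labelled = []
        for point, d in zip(points, cumulative_distance(points)):
            cell = int(d // cell_len)  # bucket the road into fixed-length cells
            labelled.append((point, f"{region}.{road_id}.{cell}"))
        return labelled

    if __name__ == "__main__":
        road = [(0.0, 0.0), (60.0, 0.0), (120.0, 0.0), (180.0, 0.0)]  # toy road geometry
        for point, code in address_cells(road, region="R1", road_id="A7"):
            print(point, code)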

    View Rendering for 3DTV

    Advancements in three-dimensional (3D) technologies are rapidly increasing. Three-Dimensional Television (3DTV) aims at creating a 3D experience for the home user. Moreover, multiview autostereoscopic displays provide a depth impression without the requirement for any special glasses and can be viewed from multiple locations. One of the key issues in the 3DTV processing chain is content generation from the available input data formats, video-plus-depth and multiview-video-plus-depth. This data allows for the possibility of producing virtual views using depth-image-based rendering. Although depth-image-based rendering is an efficient method, it is known for the appearance of artifacts such as cracks, corona and empty regions in rendered images. While several approaches have tackled the problem, reducing the artifacts in rendered images is still an active field of research.

    Two problems are addressed in this thesis in order to achieve a better 3D video quality in the context of view rendering: firstly, how to improve the quality of rendered views using a direct approach (i.e. without applying specific processing steps for each artifact), and secondly, how to fill the large missing areas in a visually plausible manner using neighbouring details from around the missing regions. This thesis introduces a new depth-image-based rendering method and a depth-based texture inpainting method in order to address these two problems. The first problem is solved by an edge-aided rendering method that relies on the principles of forward warping and one-dimensional interpolation. The other problem is addressed by the depth-included curvature inpainting method, which uses texture details from the appropriate depth level around disocclusions.

    The proposed edge-aided rendering and depth-included curvature inpainting methods are evaluated and compared with the state-of-the-art methods. The results show an increase in objective quality and a visual gain over the reference methods. The quality gain is encouraging, as the edge-aided rendering method omits the specific processing steps to remove the rendering artifacts. Moreover, the results show that large disocclusions can be effectively filled using the depth-included curvature inpainting approach. Overall, the proposed approaches improve content generation for 3DTV and, additionally, for free viewpoint television.
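
    To make the forward-warping principle mentioned above concrete, here is a minimal Python sketch of row-wise depth-image-based warping with a z-buffer; the focal length, baseline, and toy image are assumptions for illustration, not the thesis implementation.

    import numpy as np

    def forward_warp(texture, depth, focal=100.0, baseline=0.02):
        """Row-wise forward warp with a z-buffer; -1 marks disoccluded pixels."""
        h, w = depth.shape
        virtual = -np.ones((h, w), dtype=float)   # -1 = hole (disocclusion)
        zbuf = np.full((h, w), np.inf)
        disparity = focal * baseline / depth      # d = f * B / Z (toy camera values)
        for y in range(h):
            for x in range(w):
                xv = int(round(x - disparity[y, x]))
                if 0 <= xv < w and depth[y, x] < zbuf[y, xv]:
                    virtual[y, xv] = texture[y, x]   # nearer pixel wins the conflict
                    zbuf[y, xv] = depth[y, x]
        return virtual

    if __name__ == "__main__":
        tex = np.tile(np.arange(8, dtype=float), (2, 1))   # tiny synthetic texture
        dep = np.full((2, 8), 2.0)
        dep[:, 3:5] = 1.0                                  # a nearer foreground object
        print(forward_warp(tex, dep))                      # holes appear behind it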

    Free View Rendering for 3D Video : Edge-Aided Rendering and Depth-Based Image Inpainting

    Three-Dimensional Video (3DV) has become increasingly popular with the success of 3D cinema. Moreover, emerging display technology offers an immersive experience to the viewer without the necessity of any visual aids such as 3D glasses. The 3DV applications Three-Dimensional Television (3DTV) and Free Viewpoint Television (FTV) are promising technologies for living room environments, providing an immersive experience and look-around facilities. In order to provide such an experience, these technologies require a number of camera views captured from different viewpoints. However, capturing and transmitting the required number of views is not a feasible solution, and thus view rendering is employed as an efficient way to produce the necessary number of views. Depth-image-based rendering (DIBR) is a commonly used rendering method. Although DIBR is a simple approach that can produce the desired number of views, inherent artifacts are major issues in view rendering. Despite much effort to tackle the rendering artifacts over the years, rendered views still contain visible artifacts.

    This dissertation addresses three problems in order to improve 3DV quality: 1) how to improve the rendered view quality using a direct approach without dealing with each artifact specifically; 2) how to handle disocclusions (a.k.a. holes) in the rendered views in a visually plausible manner using inpainting; and 3) how to reduce spatial inconsistencies in the rendered view. The first problem is tackled by an edge-aided rendering method that uses a direct approach with one-dimensional interpolation, which is applicable when the virtual camera distance is small. The second problem is addressed by using a depth-based inpainting method in the virtual view, which reconstructs the missing texture with background data at the disocclusions. The third problem is undertaken by a rendering method that first inpaints occlusions as a layered depth image (LDI) in the original view, and then renders a spatially consistent virtual view.

    Objective assessments of the proposed methods show improvements over the state-of-the-art rendering methods. Visual inspection shows slight improvements for intermediate views rendered from multiview video-plus-depth, and the proposed methods outperform other view rendering methods in the case of rendering from single-view video-plus-depth. The results confirm that the proposed methods are capable of reducing rendering artifacts and producing spatially consistent virtual views. In conclusion, the view rendering methods proposed in this dissertation can support the production of high-quality virtual views based on a limited number of input views. When used to create a multi-scopic presentation, the outcome of this dissertation can benefit 3DV technologies to improve the immersive experience.
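
    As an illustration of the background-based disocclusion filling idea described above (and only a crude stand-in for the dissertation's inpainting method), the sketch below fills each hole pixel row by row from whichever known neighbour lies at a greater depth, i.e. on the background side; the hole marker and the toy example are assumptions.

    import numpy as np

    def fill_disocclusions(texture, depth, hole_value=-1):
        """Fill marked hole pixels per row from the farther (background) known neighbour."""
        out = texture.astype(float)
        for y in range(out.shape[0]):
            row, drow = texture[y], depth[y]
            for x in np.where(row == hole_value)[0]:
                left = next((i for i in range(x - 1, -1, -1) if row[i] != hole_value), None)
                right = next((i for i in range(x + 1, row.size) if row[i] != hole_value), None)
                sides = [i for i in (left, right) if i is not None]
                if sides:
                    # copy from the side with the larger depth value (assumed = farther away)
                    out[y, x] = row[max(sides, key=lambda i: drow[i])]
        return out

    if __name__ == "__main__":
        tex = np.array([[1.0, 3.0, 4.0, -1.0, 5.0, 6.0, 7.0, 2.0]])   # -1 is a hole
        dep = np.array([[2.0, 1.0, 1.0, 1.0, 2.0, 2.0, 2.0, 2.0]])
        print(fill_disocclusions(tex, dep))   # the hole is filled from the deeper right side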

    Active Noise Control of an Insulated Box Fan using Feedforward and Feedback Control

    In recent years, active noise control methods have become more attractive for reducing unwanted noise. The advantages of active control methods over passive control methods have drawn increasing attention to active noise control systems for the reduction of low-frequency noise. The main work in this thesis is active noise reduction of sound in ventilation systems. Two different kinds of active noise control methods were introduced to reduce the unwanted noise, and the implementations were successful. In conclusion, the best method is chosen on the basis of the attenuation achieved using different setups of the duct.
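
    For context on the feedforward variant named in the title, the sketch below implements a textbook filtered-x LMS (FxLMS) controller on synthetic signals; the secondary-path model, filter length, step size, and test tone are assumptions, and the same impulse response is reused as both the true secondary path and its estimate purely for illustration.

    import numpy as np

    def fxlms(x, d, s_hat, n_taps=32, mu=0.001):
        """Filtered-x LMS: x = reference noise, d = disturbance at the error mic,
        s_hat = secondary-path impulse response (also used here as the true path)."""
        w = np.zeros(n_taps)              # adaptive control filter
        xbuf = np.zeros(n_taps)           # recent reference samples for the controller
        sbuf = np.zeros(len(s_hat))       # recent reference samples for filtering by s_hat
        ybuf = np.zeros(len(s_hat))       # recent anti-noise samples through the secondary path
        fxbuf = np.zeros(n_taps)          # filtered-reference samples for the weight update
        e = np.zeros(len(x))
        for n in range(len(x)):
            xbuf = np.roll(xbuf, 1); xbuf[0] = x[n]
            y = w @ xbuf                              # anti-noise sample
            ybuf = np.roll(ybuf, 1); ybuf[0] = y
            e[n] = d[n] + s_hat @ ybuf                # residual at the error microphone
            sbuf = np.roll(sbuf, 1); sbuf[0] = x[n]
            fxbuf = np.roll(fxbuf, 1); fxbuf[0] = s_hat @ sbuf
            w -= mu * e[n] * fxbuf                    # FxLMS weight update
        return e

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        n = np.arange(8000)
        x = np.sin(2 * np.pi * 120 * n / 8000) + 0.1 * rng.standard_normal(n.size)
        d = -0.8 * np.roll(x, 5)                      # toy primary path: delay and gain
        s_hat = np.zeros(10); s_hat[3] = 1.0          # toy secondary path: pure delay
        e = fxlms(x, d, s_hat)
        print("error power, first vs last 500 samples:",
              np.mean(e[:500] ** 2), np.mean(e[-500:] ** 2))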

    Depth-Based Inpainting For Disocclusion Filling

    Depth-based inpainting methods can solve disocclusion problems occurring in depth-image-based rendering. However, inpainting in this context suffers from artifacts along foreground objects due to foreground pixels being included in the patch matching. In this paper, we address the disocclusion problem with a refined depth-based inpainting method. The novelty lies in classifying the foreground and background using the available local depth information, so that foreground information is excluded from both the source region and the target patch. In the proposed inpainting method, the local depth constraints imply inpainting only the background data and preserving the foreground object boundaries. The results from the proposed method are compared with those from state-of-the-art inpainting methods. The experimental results demonstrate improved objective quality and better visual quality along the object boundaries.
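
    A toy Python version of the foreground/background classification idea is given below: around a disocclusion, known pixels whose depth lies beyond a local threshold are kept as the inpainting source region. The simple mean-depth threshold and margin are assumptions rather than the paper's classification rule, and larger depth values are assumed to mean farther away.

    import numpy as np

    def background_source_mask(depth, hole_mask, margin=8):
        """Boolean mask of known background pixels usable as an inpainting source region."""
        ys, xs = np.where(hole_mask)
        y0, y1 = max(ys.min() - margin, 0), min(ys.max() + margin + 1, depth.shape[0])
        x0, x1 = max(xs.min() - margin, 0), min(xs.max() + margin + 1, depth.shape[1])
        local_depth = depth[y0:y1, x0:x1]
        known = ~hole_mask[y0:y1, x0:x1]
        threshold = local_depth[known].mean()        # crude foreground/background split
        source = np.zeros_like(hole_mask, dtype=bool)
        # keep only known pixels at or beyond the threshold (assumed background)
        source[y0:y1, x0:x1] = known & (local_depth >= threshold)
        return source

    if __name__ == "__main__":
        depth = np.array([[1.0, 1.0, 5.0, 5.0],
                          [1.0, 1.0, 5.0, 5.0]])
        holes = np.zeros((2, 4), dtype=bool); holes[0, 2] = True
        print(background_source_mask(depth, holes, margin=1))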

    Edge-preserving depth-image-based rendering method

    Distribution of future 3DTV is likely to use supplementary depth information to a video sequence. New virtual views may then be rendered in order to adjust to different 3D displays. All depth-image-based rendering (DIBR) methods suffer from artifacts in the resulting images, which are corrected by different post-processing. The proposed method is based on fundamental principles of 3D warping. The novelty lies in how the virtual view sample values are obtained from one-dimensional interpolation, where edges are preserved by introducing specific edge-pixels with information about both foreground and background data. This fully avoids the post-processing of filling cracks and holes. We compared rendered virtual views of our method and of the View Synthesis Reference Software (VSRS) and analyzed the results based on typical artifacts. The proposed method obtained better quality for photographic images and similar quality for synthetic images.
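
    The one-dimensional interpolation step can be pictured with the small sketch below, which resamples irregularly positioned warped samples of one row onto the integer pixel grid so that cracks are closed implicitly; the edge-pixel handling of the paper is not reproduced, and the inputs are assumed toy values.

    import numpy as np

    def interpolate_row(warped_x, values, width):
        """Resample one row of irregularly positioned warped samples onto the pixel grid."""
        warped_x = np.asarray(warped_x, dtype=float)
        values = np.asarray(values, dtype=float)
        order = np.argsort(warped_x)                 # np.interp expects increasing x
        grid = np.arange(width, dtype=float)
        return np.interp(grid, warped_x[order], values[order])

    if __name__ == "__main__":
        # warped sample positions with a small gap (crack) between 2.4 and 4.1
        print(interpolate_row([0.0, 1.2, 2.4, 4.1, 5.3], [10, 11, 12, 20, 21], width=6))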

    Active Noise Control of a Radial Fan

    This thesis work aims at investigating the use of an active noise control (ANC) system on a radial fan. This was done by studying the fan structure and its potential working environment (the ducts), which included measuring the sound levels at several positions and selecting suitable positions at which to apply the ANC system. Moreover, the tested ANC system was implemented on the ventilation system and acceptable results were obtained. Further analyses were made based on the obtained results, and some explanations were derived for the ANC system's inability to attenuate the noise generated by the fan at some frequencies.
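
    One plausible way to quantify the per-frequency attenuation discussed above is sketched below, comparing error-microphone recordings made with the ANC system off and on via Welch power spectra; the recordings, sample rate, and segment length are placeholders rather than the thesis measurement setup.

    import numpy as np
    from scipy.signal import welch

    def attenuation_db(noise_off, noise_on, fs=8000, nperseg=1024):
        """Per-frequency attenuation in dB (positive values mean the noise was reduced)."""
        f, p_off = welch(noise_off, fs=fs, nperseg=nperseg)
        _, p_on = welch(noise_on, fs=fs, nperseg=nperseg)
        return f, 10 * np.log10(p_off / p_on)

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        off = np.sin(2 * np.pi * 200 * np.arange(16000) / 8000) + 0.2 * rng.standard_normal(16000)
        on = 0.3 * off + 0.2 * rng.standard_normal(16000)    # pretend the ANC reduced the tone
        f, att = attenuation_db(off, on)
        print(f[np.argmax(att)], att.max())                  # frequency with the largest reduction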

    Depth-Included Curvature Inpainting for Disocclusion Filling in View Synthesis

    Depth-image-based rendering (DIBR) is commonly used for generating additional views for 3DTV and FTV from 3D video formats such as video-plus-depth (V+D) and multiview-video-plus-depth (MVD). When DIBR is used, the synthesized views suffer from artifacts, mainly disocclusions. Depth-based inpainting methods can solve these problems plausibly. In this paper, we analyze the influence of the depth information at various steps of the depth-included curvature inpainting method. The depth-based inpainting method relies on the depth information at every step of the inpainting process: boundary extraction for missing areas, data term computation for structure propagation, and patch matching to find the best data. The importance of depth at each step is evaluated using objective metrics and visual comparison. Our evaluation demonstrates that depth information plays a key role in each step. Moreover, to what degree depth can be used in each step of the inpainting process depends on the depth distribution.
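
    To give a flavour of one of those steps, the sketch below adds a depth-similarity penalty to a plain sum-of-squared-differences patch-matching cost so that background-consistent source patches are preferred; the patch size and weighting factor are assumptions, and the boundary-extraction and curvature data-term steps of the method are not included. The best source patch would then be the candidate position minimizing this cost.

    import numpy as np

    def patch_cost(tex, depth, target_tl, source_tl, known_mask, size=9, lam=1.0):
        """SSD between patches over known target pixels, plus a depth-similarity penalty."""
        ty, tx = target_tl
        sy, sx = source_tl
        m = known_mask[ty:ty + size, tx:tx + size]           # compare only known target pixels
        tex_diff = tex[ty:ty + size, tx:tx + size][m] - tex[sy:sy + size, sx:sx + size][m]
        dep_diff = depth[ty:ty + size, tx:tx + size][m] - depth[sy:sy + size, sx:sx + size][m]
        return np.sum(tex_diff ** 2) + lam * np.sum(dep_diff ** 2)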

    Edge-aided virtual view rendering for multiview video plus depth

    Depth-Image-Based Rendering (DIBR) of virtual views is a fundamental method in three-dimensional (3-D) video applications to produce different perspectives from texture and depth information, in particular the multiview-plus-depth (MVD) format. Artifacts are still present in virtual views as a consequence of imperfect rendering using existing DIBR methods. In this paper, we propose an alternative DIBR method for MVD. In the proposed method, we introduce an edge pixel and interpolate pixel values in the virtual view using the actual projected coordinates from two adjacent views, by which cracks and disocclusions are automatically filled. In particular, we propose a method to merge pixel information from two adjacent views in the virtual view before the interpolation; we apply a weighted averaging of projected pixels within the range of one pixel in the virtual view. We compared virtual view images rendered by the proposed method to the corresponding view images rendered by state-of-the-art methods. Objective metrics demonstrated an advantage of the proposed method for most investigated media contents. Subjective test results showed preference for different methods depending on media content, and the test could not demonstrate a significant difference between the proposed method and state-of-the-art methods.
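
    The merging step described above can be approximated by the short sketch below, which blends projected samples from two adjacent views that fall within one pixel of each virtual-view grid position using distance-based weights; occlusion handling and the edge-pixel mechanism are omitted, and the sample arrays (simply concatenated from both views) are assumed inputs.

    import numpy as np

    def merge_projected(grid_x, samples_x, samples_val):
        """Blend projected samples within one pixel of each grid position (NaN if none)."""
        samples_x = np.asarray(samples_x, dtype=float)
        samples_val = np.asarray(samples_val, dtype=float)
        merged = np.full(len(grid_x), np.nan)
        for i, gx in enumerate(grid_x):
            dist = np.abs(samples_x - gx)
            near = dist < 1.0
            if near.any():
                weights = 1.0 - dist[near]                   # closer samples weigh more
                merged[i] = np.sum(weights * samples_val[near]) / np.sum(weights)
        return merged

    if __name__ == "__main__":
        # samples projected from a left and a right view, concatenated into one list
        sx = [0.1, 0.9, 2.2, 0.2, 1.1, 2.0]
        sv = [10.0, 11.0, 12.0, 10.5, 11.5, 12.5]
        print(merge_projected(np.arange(3), sx, sv))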